1.
Korean Journal of Radiology ; : 1764-1776, 2021.
Article in English | WPRIM | ID: wpr-918209

ABSTRACT

Objective: This study aimed to validate a deep learning-based fully automatic calcium scoring (coronary artery calcium [CAC]_auto) system using previously published cardiac computed tomography (CT) cohort data, with the manually segmented coronary calcium scoring (CAC_hand) system as the reference standard. Materials and Methods: We developed the CAC_auto system using 100 co-registered, non-enhanced and contrast-enhanced CT scans. For validation of the CAC_auto system, three previously published CT cohorts (n = 2985) were chosen to represent different clinical scenarios (2647 asymptomatic, 220 symptomatic, 118 valve disease) and four CT models. The performance of the CAC_auto system in detecting coronary calcium was determined. The reliability of the system in measuring the Agatston score, as compared with CAC_hand, was also evaluated per vessel and per patient using intraclass correlation coefficients (ICCs) and Bland-Altman analysis. The agreement between CAC_auto and CAC_hand across the cardiovascular risk stratification categories (Agatston score: 0, 1–10, 11–100, 101–400, > 400) was evaluated. Results: In 2985 patients, 6218 coronary calcium lesions were identified using CAC_hand. The per-lesion sensitivity and false-positive rate of the CAC_auto system in detecting coronary calcium were 93.3% (5800 of 6218) and 0.11 false-positive lesions per patient, respectively. In measuring the Agatston score, the CAC_auto system yielded ICCs of 0.99 for all vessels (left main 0.91, left anterior descending 0.99, left circumflex 0.96, right coronary 0.99). The limits of agreement between CAC_auto and CAC_hand were 1.6 ± 52.2. The linearly weighted kappa value for the Agatston score categorization was 0.94. The main causes of false-positive results were image noise (29.1%, 97/333 lesions), aortic wall calcification (25.5%, 85/333 lesions), and pericardial calcification (24.3%, 81/333 lesions).
Conclusion: The atlas-based CAC_auto system empowered by deep learning provided accurate calcium score measurement and risk category classification as compared with the manual method, which could potentially streamline CAC imaging workflows.
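As a rough illustration of the risk stratification step evaluated in this study, the Agatston cutoffs quoted above (0, 1–10, 11–100, 101–400, > 400) can be sketched as a small lookup function; the name `agatston_category` is illustrative, not from the paper:

```python
def agatston_category(score: float) -> int:
    """Map an Agatston calcium score to one of the five risk strata
    used in the study (0, 1-10, 11-100, 101-400, > 400)."""
    if score == 0:
        return 0
    if score <= 10:
        return 1
    if score <= 100:
        return 2
    if score <= 400:
        return 3
    return 4
```

Agreement between automatic and manual categories produced this way would then be summarized with a linearly weighted kappa, as the authors report.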

2.
Korean Journal of Radiology ; : 660-669, 2020.
Article | WPRIM | ID: wpr-833561

ABSTRACT

Objective: To evaluate the accuracy of deep learning-based automated segmentation of the left ventricular (LV) myocardium using cardiac CT. Materials and Methods: To develop a fully automated algorithm, 100 subjects with coronary artery disease were randomly selected as a development set (50 training / 20 validation / 30 internal test). An experienced cardiac radiologist generated the manual segmentations of the development set. The trained model was evaluated on a 1000-case validation set generated by an experienced technician. Visual assessment was performed to compare the manual and automatic segmentations. In a quantitative analysis, sensitivity and specificity were calculated from the number of pixels where the three-dimensional masks of the manual and deep learning segmentations overlapped. Similarity indices, such as the Dice similarity coefficient (DSC), were used to evaluate the margins of the segmented masks. Results: The sensitivity and specificity of automated segmentation for each segment (segments 1–16) were high (85.5–100.0%). The DSC was 88.3 ± 6.2%. Among 100 randomly selected cases, all manual and deep learning masks assessed visually were classified as very accurate to mostly accurate, with no inaccurate cases (manual vs. deep learning: very accurate, 31 vs. 53; accurate, 64 vs. 39; mostly accurate, 15 vs. 8). The number of very accurate cases was greater for deep learning masks than for manually segmented masks. Conclusion: We present deep learning-based automatic segmentation of the LV myocardium; the results are comparable to manual segmentation, with high sensitivity, specificity, and similarity scores.
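The Dice similarity coefficient reported above has a standard definition on binary masks; a minimal sketch (the function name is an assumption, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b) -> float:
    """Dice similarity coefficient between two binary segmentation masks:
    2 * |A intersect B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    # Two empty masks agree perfectly by convention.
    return 2.0 * intersection / denom if denom else 1.0
```

A DSC of 88.3% as in the study corresponds to substantial voxel-wise overlap between the automatic and manual myocardial masks.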

3.
Korean Journal of Radiology ; : 1207-1215, 2019.
Article in English | WPRIM | ID: wpr-760280

ABSTRACT

OBJECTIVE: To retrospectively investigate whether tumor size assessment on multiplanar reconstruction (MPR) CT images reflects pathologic T-stage better than evaluation on axial images, and to evaluate the additional value of measurement in three-dimensional (3D) space. MATERIALS AND METHODS: From 1661 patients who had undergone surgical resection for primary lung cancer between June 2013 and November 2016, 210 patients (145 men; mean age, 64.4 years) were randomly selected, with 30 assigned to each pathologic T-stage. Two readers independently measured the maximal lesion diameters on MPR CT. The longest diameters in 3D were obtained using volume segmentation. T-stages determined on CT images were compared with pathologic T-stages (overall and by subgroup: Group 1, T1a/b; Group 2, T1c or higher), with differences in accuracy evaluated using McNemar's test. Agreement between readers was evaluated with intraclass correlation coefficients (ICCs). RESULTS: The diagnostic accuracy of MPR measurements for determining T-stage was significantly higher than that of axial measurement alone for both reader 1 (74.3% [156/210] vs. 63.8% [134/210]; p = 0.001) and reader 2 (68.1% [143/210] vs. 61.9% [130/210]; p = 0.049). In the subgroup analysis, diagnostic accuracy with the MPR diameter was significantly higher than with the axial diameter only in Group 2 (p < 0.05). The inter-reader ICCs for axial and MPR measurements were both 0.98. The longest diameter on 3D images showed significantly lower performance than MPR, with an accuracy of 54.8% (115/210) (p < 0.05). CONCLUSION: Size measurement on MPR CT better reflected the pathologic T-stage, specifically for T1c or higher stage lung cancer. Measurements in 3D space showed no added value.


Subject(s)
Humans, Male, Lung Neoplasms, Lung, Multidetector Computed Tomography, Neoplasm Staging, Retrospective Studies
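The longest 3D diameter obtained from volume segmentation, as described in the abstract above, could in principle be computed from a binary lesion mask as the maximum pairwise distance between voxel centers; this brute-force sketch and the `longest_3d_diameter` name are assumptions for illustration, not the study's implementation:

```python
import numpy as np

def longest_3d_diameter(mask, spacing=(1.0, 1.0, 1.0)) -> float:
    """Maximum pairwise Euclidean distance between foreground voxels
    of a binary 3D mask, scaled by the voxel spacing in each axis."""
    coords = np.argwhere(np.asarray(mask, dtype=bool)) * np.asarray(spacing)
    if len(coords) < 2:
        return 0.0
    # O(n^2) pairwise distances; acceptable for small lesion masks only.
    diffs = coords[:, None, :] - coords[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(axis=-1)).max())
```

For large masks, a practical implementation would restrict the computation to surface voxels or the convex hull before taking pairwise distances.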
4.
Korean Journal of Radiology ; : 1431-1440, 2019.
Article in English | WPRIM | ID: wpr-760252

ABSTRACT

OBJECTIVE: To retrospectively assess the effect of CT slice thickness on the reproducibility of radiomic features (RFs) of lung cancer, and to investigate whether convolutional neural network (CNN)-based super-resolution (SR) algorithms can improve the reproducibility of RFs obtained from images with different slice thicknesses. MATERIALS AND METHODS: CT images with 1-, 3-, and 5-mm slice thicknesses obtained from 100 pathologically proven lung cancers between July 2017 and December 2017 were evaluated. CNN-based SR algorithms using residual learning were developed to convert thick-slice images into 1-mm slices. Lung cancers were semi-automatically segmented, and a total of 702 RFs (tumor intensity, texture, and wavelet features) were extracted from the 1-, 3-, and 5-mm slices, as well as from the 1-mm slices generated from the 3- and 5-mm images. The stability of the RFs was evaluated using concordance correlation coefficients (CCCs). RESULTS: The mean CCCs for the comparisons of original 1-mm vs. 3-mm, 1-mm vs. 5-mm, and 3-mm vs. 5-mm images were 0.41, 0.27, and 0.65, respectively (p < 0.001 for all comparisons). Tumor intensity features showed the best reproducibility, while wavelet features showed the lowest. Only a minority of RFs achieved reproducibility (CCC ≥ 0.85): 3.6%, 1.0%, and 21.5%, respectively. After applying the CNN-based SR algorithms, reproducibility significantly improved in all three pairings (mean CCCs: 0.58, 0.45, and 0.72; p < 0.001 for all comparisons), and the proportion of reproducible RFs also increased (36.3%, 17.4%, and 36.9%, respectively). CONCLUSION: The reproducibility of RFs in lung cancer is significantly influenced by CT slice thickness and can be improved by CNN-based SR algorithms.


Subject(s)
Learning, Lung Neoplasms, Lung, Retrospective Studies
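The concordance correlation coefficient used above to gauge feature stability has a standard closed form: 2·cov(x, y) / (var(x) + var(y) + (mean(x) − mean(y))²). A minimal sketch (the function name is an assumption):

```python
import numpy as np

def concordance_ccc(x, y) -> float:
    """Lin's concordance correlation coefficient between two paired
    measurement vectors, using population (ddof=0) moments."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(2.0 * cov / (vx + vy + (mx - my) ** 2))
```

Unlike Pearson's r, the CCC penalizes systematic shifts and scale differences, which is why it is preferred for reproducibility studies such as this one.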
5.
Korean Journal of Radiology ; : 295-303, 2019.
Article in English | WPRIM | ID: wpr-741397

ABSTRACT

OBJECTIVE: The aim of our study was to develop and validate a convolutional neural network (CNN) architecture to convert CT images reconstructed with one kernel to images with different reconstruction kernels without using a sinogram. MATERIALS AND METHODS: This retrospective study was approved by the Institutional Review Board. Ten chest CT scans were performed and reconstructed with the B10f, B30f, B50f, and B70f kernels. The dataset was divided into six, two, and two examinations for training, validation, and testing, respectively. We constructed a CNN architecture consisting of six convolutional layers, each with a 3 × 3 kernel with 64 filter banks. Quantitative performance was evaluated using root mean square error (RMSE) values. To validate clinical use, image conversion was conducted on 30 additional chest CT scans reconstructed with the B30f and B50f kernels. The influence of image conversion on emphysema quantification was assessed with Bland–Altman plots. RESULTS: Our scheme rapidly generated conversion results at the rate of 0.065 s/slice. Substantial reduction in RMSE was observed in the converted images in comparison with the original images with different kernels (mean reduction, 65.7%; range, 29.5–82.2%). The mean emphysema indices for B30f, B50f, converted B30f, and converted B50f were 5.4 ± 7.2%, 15.3 ± 7.2%, 5.9 ± 7.3%, and 16.8 ± 7.5%, respectively. The 95% limits of agreement between B30f and other kernels (B50f and converted B30f) ranged from −14.1% to −2.6% (mean, −8.3%) and −2.3% to 0.7% (mean, −0.8%), respectively. CONCLUSION: CNN-based CT kernel conversion shows adequate performance with high accuracy and speed, indicating its potential clinical use.


Subject(s)
Dataset, Emphysema, Ethics Committees, Research, Image Processing, Computer-Assisted, Machine Learning, Multidetector Computed Tomography, Retrospective Studies, Tomography, X-Ray Computed
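The two evaluation measures in the abstract above, RMSE between converted and target-kernel images and Bland-Altman limits of agreement for the emphysema index, are standard; a minimal sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def rmse(a, b) -> float:
    """Root mean square error between two images or arrays."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement (mean difference
    +/- 1.96 * SD of the paired differences)."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    m, s = d.mean(), d.std(ddof=1)
    return m - 1.96 * s, m + 1.96 * s
```

In the study, RMSE quantified pixel-level fidelity of the kernel-converted images, while the limits of agreement quantified how kernel conversion narrowed the discrepancy in emphysema indices.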
6.
Korean Journal of Radiology ; : 570-584, 2017.
Article in English | WPRIM | ID: wpr-118265

ABSTRACT

The artificial neural network (ANN), a machine learning technique inspired by the human neuronal synapse system, was introduced in the 1950s. However, the ANN was long limited in its ability to solve real-world problems, owing to the vanishing gradient and overfitting problems encountered when training deep architectures, a lack of computing power, and, above all, the absence of sufficient data to train the systems. Interest in the concept has recently resurfaced thanks to the availability of big data, enhanced computing power from modern graphics processing units, and novel algorithms for training deep neural networks. Recent studies of this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend applications in medicine and healthcare, especially medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging.


Subject(s)
Humans, Artificial Intelligence, Computer Systems, Delivery of Health Care, Diagnostic Imaging, Machine Learning, Neurons, Precision Medicine, Synapses
7.
Journal of the Korean Radiological Society ; : 21-26, 2007.
Article in Korean | WPRIM | ID: wpr-161829

ABSTRACT

PURPOSE: To develop an automated classification system for the differentiation of obstructive lung diseases based on textural analysis of HRCT images, and to evaluate the accuracy and usefulness of the system. MATERIALS AND METHODS: For textural analysis, histogram features, gradient features, run-length encoding, and a co-occurrence matrix were employed. A Bayesian classifier was used for automated classification. The images (n = 256) were selected from HRCT images obtained from 17 healthy subjects (n = 67), 26 patients with bronchiolitis obliterans (n = 70), 28 patients with mild centrilobular emphysema (n = 65), and 21 patients with panlobular emphysema or severe centrilobular emphysema (n = 63). A five-fold cross-validation method was used to assess the performance of the system. Class-specific sensitivities were analyzed, and the overall accuracy of the system was assessed with kappa statistics. RESULTS: The sensitivity of the system for each class was as follows: normal lung, 84.9%; bronchiolitis obliterans, 83.8%; mild centrilobular emphysema, 77.0%; and panlobular emphysema or severe centrilobular emphysema, 95.8%. The overall performance in differentiating each disease from the normal lung was satisfactory, with a kappa value of 0.779. CONCLUSION: An automated classification system for the differentiation of obstructive lung diseases based on textural analysis of HRCT images was developed. The proposed system discriminates well between the various obstructive lung diseases and the normal lung.


Subject(s)
Humans, Bronchiolitis Obliterans, Classification, Emphysema, Lung, Lung Diseases, Obstructive, Pulmonary Emphysema
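A grey-level co-occurrence matrix of the kind used for the textural analysis above can be accumulated for a single pixel offset as follows; this sketch and the `cooccurrence_matrix` name are illustrative, not the paper's implementation:

```python
import numpy as np

def cooccurrence_matrix(img, levels: int, offset=(0, 1)):
    """Grey-level co-occurrence counts for one (row, col) pixel offset.
    img must already be quantized to integers in [0, levels)."""
    img = np.asarray(img, dtype=int)
    glcm = np.zeros((levels, levels), dtype=int)
    dr, dc = offset
    rows, cols = img.shape
    # Count each ordered pair (pixel, neighbor at the given offset).
    for r in range(max(0, -dr), min(rows, rows - dr)):
        for c in range(max(0, -dc), min(cols, cols - dc)):
            glcm[img[r, c], img[r + dr, c + dc]] += 1
    return glcm
```

Texture statistics such as contrast, energy, and homogeneity are then derived from the normalized matrix and fed, alongside histogram and gradient features, to a classifier such as the Bayesian one used here.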